Privacy-Aware Recommendation with Private-Attribute Protection using Adversarial Learning
Recommendation is a critical application that helps users find
information relevant to their interests. However, a malicious attacker can
infer users' private information from the recommendations they receive. Prior
work obfuscates user-item data before sharing it with the recommendation
system. This approach does not explicitly address recommendation quality while
performing data obfuscation, and it cannot protect users against
private-attribute inference attacks based on recommendations. This work is the
first attempt to
build a Recommendation with Attribute Protection (RAP) model which
simultaneously recommends relevant items and counters private-attribute
inference attacks. The key idea of our approach is to formulate this problem as
an adversarial learning problem with two main components: the private attribute
inference attacker and the Bayesian personalized recommender. The attacker
seeks to infer users' private-attribute information from their item lists and
recommendations. The recommender aims to extract users' interests
while employing the attacker to regularize the recommendation process.
Experiments show that the proposed model both preserves the quality of
recommendation service and protects users against private-attribute inference
attacks.

Comment: The Thirteenth ACM International Conference on Web Search and Data
Mining (WSDM 2020).
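The adversarial setup the abstract describes can be sketched as a toy min-max loop in numpy (this is an illustrative reconstruction under assumptions of my own, not the authors' RAP implementation): a logistic "attacker" probes user embeddings for a hypothetical binary private attribute, while a BPR-style recommender descends its ranking loss and simultaneously ascends the attacker's loss, with `lam` controlling the strength of the adversarial regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 8, 12, 4
U = 0.1 * rng.standard_normal((n_users, dim))     # user embeddings
V = 0.1 * rng.standard_normal((n_items, dim))     # item embeddings
w = np.zeros(dim)                                 # attacker: logistic probe on U
attr = rng.integers(0, 2, n_users).astype(float)  # hypothetical private attribute

# one observed (positive) and one unobserved (negative) item per user
pos = rng.integers(0, n_items, n_users)
neg = (pos + rng.integers(1, n_items, n_users)) % n_items  # guaranteed != pos

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss():
    x = np.einsum("ud,ud->u", U, V[pos] - V[neg])  # per-user score differences
    return -np.mean(np.log(sigmoid(x)))

lam, lr = 0.5, 0.1
init_loss = bpr_loss()
for _ in range(200):
    # attacker step: gradient ascent on its log-likelihood of the attribute
    p = sigmoid(U @ w)
    w += lr * U.T @ (attr - p)

    # recommender step: descend the BPR loss, ascend the attacker's loss
    x = np.einsum("ud,ud->u", U, V[pos] - V[neg])
    g = sigmoid(-x)                               # magnitude of dBPR/dx
    grad_U = -g[:, None] * (V[pos] - V[neg])      # dBPR/dU
    p = sigmoid(U @ w)
    grad_att = (p - attr)[:, None] * w[None, :]   # d(attacker loss)/dU
    U -= lr * (grad_U - lam * grad_att)           # adversarial regularization
    V[pos] += lr * g[:, None] * U                 # (fancy-index update; fine
    V[neg] -= lr * g[:, None] * U                 #  for a toy sketch)

final_loss = bpr_loss()
```

The alternating updates mirror the paper's two components: the attacker is trained to be as strong as possible, and the recommender treats the attacker's success as a penalty, so user representations stay useful for ranking while becoming less predictive of the private attribute.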
Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation
Many security and privacy problems can be modeled as a graph classification
problem, where nodes in the graph are classified by collective classification
simultaneously. State-of-the-art collective classification methods for such
graph-based security and privacy analytics follow a common paradigm:
assign weights to the edges of the graph, iteratively propagate reputation
scores of nodes over the weighted graph, and use the final reputation scores to
classify nodes in the graph. The key challenge is to assign edge weights such
that an edge has a large weight if the two corresponding nodes have the same
label, and a small weight otherwise. Although collective classification has
been studied and applied for security and privacy problems for more than a
decade, how to address this challenge is still an open question. In this work,
we propose a novel collective classification framework to address this
long-standing challenge. We first formulate learning edge weights as an
optimization problem, which quantifies the goals about the final reputation
scores that we aim to achieve. However, it is computationally hard to solve the
optimization problem because the final reputation scores depend on the edge
weights in a very complex way. To address the computational challenge, we
propose to jointly learn the edge weights and propagate the reputation scores,
which is essentially an approximate solution to the optimization problem. We
compare our framework with state-of-the-art methods for graph-based security
and privacy analytics using four large-scale real-world datasets from various
application scenarios such as Sybil detection in social networks, fake review
detection in Yelp, and attribute inference attacks. Our results demonstrate
that our framework achieves higher accuracies than state-of-the-art methods
with an acceptable computational overhead.

Comment: Network and Distributed System Security Symposium (NDSS), 2019.
Dataset link: http://gonglab.pratt.duke.edu/code-dat
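The joint weight-learning-and-propagation idea can be illustrated with a small numpy sketch (a toy graph and update rules of my own choosing, not the paper's exact algorithm): edge weights are nudged up when the two endpoints' current reputation scores agree in sign and down when they disagree, interleaved with a linearized propagation step seeded from a few labeled nodes, as in Sybil detection.

```python
import numpy as np

# toy undirected graph; in Sybil detection, nodes are accounts, edges friendships
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (2, 5)]
n = 6
prior = np.zeros(n)
prior[0] = 1.0    # labeled benign
prior[3] = -1.0   # labeled Sybil

w = {e: 0.5 for e in edges}   # initial edge weights
s = prior.copy()              # reputation scores

for _ in range(20):
    # weight learning: strengthen edges whose endpoints currently agree in sign
    for (u, v) in edges:
        w[(u, v)] = float(np.clip(w[(u, v)] + 0.1 * s[u] * s[v], 0.0, 1.0))
    # propagation: each node's score = its prior + weighted sum of neighbors
    new = prior.copy()
    for (u, v) in edges:
        new[u] += 0.5 * w[(u, v)] * s[v]
        new[v] += 0.5 * w[(u, v)] * s[u]
    s = np.tanh(new)          # squash to keep scores bounded in (-1, 1)

labels = np.where(s > 0, "benign", "sybil")
```

Interleaving the two steps is the key point: since the final scores depend on the weights in a complex way, the weights are adjusted using the current (intermediate) scores at each round, giving an approximate solution to the underlying optimization problem rather than solving it exactly.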